Current Issue: April - June | Volume: 2016 | Issue Number: 2 | Articles: 4
Background: A brain-machine interface (BMI) should help people with disabilities by replacing their lost motor functions. To this end, robot arms controlled by invasive neural signals have been developed. Although invasive neural signals have a high spatial resolution, non-invasive neural signals are valuable because they provide an interface without surgery. Various researchers have therefore developed robot arms driven by non-invasive neural signals. However, robot arm control based on the imagined trajectory of a human hand can be more intuitive for patients. In this study, therefore, an integrated robot arm-gripper system (IRAGS) driven by three-dimensional (3D) hand trajectories predicted from non-invasive neural signals was developed and verified.
Methods: The IRAGS was developed by integrating a six-degree-of-freedom robot arm and an adaptive robot gripper. The system was used to perform reaching and grasping motions for verification. Non-invasive neural signals, magnetoencephalography (MEG) and electroencephalography (EEG), were recorded to control the system. The 3D trajectories were predicted by multiple linear regression. A target sphere was placed at the terminal point of the real trajectories, and the system was commanded to grasp the target at the terminal point of the predicted trajectories.
Results: The average correlation coefficient between the predicted and real trajectories was 0.705 ± 0.292 (p < 0.001) with MEG and 0.684 ± 0.309 (p < 0.001) with EEG. The success rates in grasping the target plastic sphere were 18.75% and 7.50% with MEG and EEG, respectively. The success rates in touching the target were 52.50% and 58.75%, respectively.
Conclusions: A robot arm driven by 3D trajectories predicted from non-invasive neural signals was implemented, and reaching and grasping motions were performed. In most cases the robot closely approached the target, but the success rate was not very high because non-invasive neural signals are less accurate. However, the success rate could be improved sufficiently for practical applications by using additional sensors. Robot arm control based on hand trajectories predicted from EEG would allow portability, and the performance with EEG was comparable to that with MEG.
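The abstract's core decoding step, predicting each 3D coordinate by multiple linear regression from neural features and scoring it with the correlation between predicted and real trajectories, can be sketched as follows. This is a minimal illustration on synthetic data; the feature count, sample count, and noise level are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1000 time samples of 64 sensor features (e.g. MEG/EEG
# band power) and the corresponding 3D hand position at each sample.
n_samples, n_features = 1000, 64
X = rng.standard_normal((n_samples, n_features))
true_W = rng.standard_normal((n_features, 3))
Y = X @ true_W + 0.1 * rng.standard_normal((n_samples, 3))  # x, y, z trajectory

# Fit one multiple linear regression per coordinate via least squares,
# with an intercept column appended to the features.
X1 = np.column_stack([X, np.ones(n_samples)])
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
Y_hat = X1 @ W

# Evaluate with the per-coordinate Pearson correlation between the
# predicted and real trajectories, the metric reported in the abstract.
r = [np.corrcoef(Y[:, k], Y_hat[:, k])[0, 1] for k in range(3)]
print([round(v, 3) for v in r])
```

With real MEG/EEG data the correlations would of course be far lower than on this synthetic example, in line with the ~0.7 averages reported above.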
The temporal contingency of feedback is an essential requirement of successful human-computer interaction. The timing of feedback not only affects the behavior of a user but is also accompanied by changes in psychophysiology and neural activity. In three fMRI experiments we systematically studied the impact of delayed feedback on brain activity while subjects performed an auditory categorization task. In the first fMRI experiment, we analyzed the effects of rare and thus unexpected delays of different durations on brain activity. In the second experiment, we investigated whether users can adapt to frequent delays; therefore, delays were presented as often as immediate feedback. In a third experiment, the influence of interaction outage was analyzed by measuring the effect of infrequent omissions of feedback on brain activity. The results show that unexpected delays in feedback presentation, compared to immediate feedback, more strongly activate, inter alia, the bilateral anterior insular cortex, the posterior medial frontal cortex, the left inferior parietal lobule, and the right inferior frontal junction. The strength of this activation increases with the duration of the delay. Thus, delays interrupt the course of an interaction and trigger an orienting response that in turn activates brain regions of action control. If delays occur frequently, users can adapt, delays become expectable, and the brain activity in the observed network diminishes over the course of the interaction. However, introducing rare omissions of expected feedback reduces the system's trustworthiness, which leads to an increase in brain activity not only in response to such omissions but also following frequently occurring and thus expected delays.
We introduce a vision-based arm gesture recognition (AGR) system using Kinect. The AGR system learns a discrete Hidden Markov Model (HMM), an effective probabilistic graph model for gesture recognition, from the dynamic pose of the arm joints provided by the Kinect API. Because Kinect's viewpoint and the subject's arm length can substantially affect the estimated 3D pose of each joint, it is difficult to recognize gestures reliably with these features. The proposed system performs a feature transformation that converts the 3D Cartesian coordinates of each joint into the 2D spherical angles of the corresponding arm part, yielding view-invariant and more discriminative features. We confirmed the high recognition performance of the proposed AGR system through experiments with two different datasets.
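The feature transformation described in this abstract, replacing a joint's 3D Cartesian coordinates with the two spherical angles of the arm part leading to it, can be sketched as below. The specific joint pair and coordinate convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def arm_part_angles(parent, child):
    """Convert the 3D vector from a parent joint to a child joint
    (e.g. shoulder -> elbow) into two spherical angles, discarding
    the vector's length. Because the length is dropped, subjects with
    different arm lengths pointing the same way give the same feature.
    """
    v = np.asarray(child, dtype=float) - np.asarray(parent, dtype=float)
    azimuth = np.arctan2(v[1], v[0])                     # angle in the x-y plane
    elevation = np.arctan2(v[2], np.hypot(v[0], v[1]))   # angle above the plane
    return azimuth, elevation

# A longer arm segment pointing in the same direction yields identical angles.
a1 = arm_part_angles([0, 0, 0], [1, 1, 1])
a2 = arm_part_angles([0, 0, 0], [2, 2, 2])
print(np.allclose(a1, a2))  # True
```

Quantizing such angle pairs into a discrete symbol alphabet would then provide the observation sequence for the discrete HMM.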
This paper describes a man-machine interface system using electrooculography (EOG) and electromyography (EMG), and a manipulator control system developed on the basis of these signals. The system enables a user to control a manipulator with eye movements: EOG is used for moving the robot joint angles, and EMG is used for object grasping. The robot arm joint movements are determined by an EOG discrimination method based on the polarity of the eye-gaze motion signals in channels Ch1 and Ch2. The EMG discrimination method is used to control the arm gripper to grasp and release the target object. In the robot control experiment, we successfully controlled the uArmTM robot using both EOG and EMG discrimination methods as the control input. This control system demonstrates the feasibility of a man-machine interface for elderly and handicapped persons.
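The polarity-based discrimination described in this abstract can be sketched as a simple windowed classifier. The thresholds, window handling, and command names below are all assumptions for illustration; the paper's actual discrimination rules are not given in the abstract.

```python
import numpy as np

# Assumed detection thresholds (hypothetical values, in microvolts).
EOG_TH = 50.0   # minimum EOG deflection to count as an eye movement
EMG_TH = 100.0  # minimum EMG amplitude to count as a grasp command

def discriminate(ch1, ch2, emg):
    """Map one window of EOG (Ch1, Ch2) and EMG samples to a command.

    EMG above threshold toggles the gripper; otherwise the sign
    (polarity) of the dominant EOG deflection on Ch1 or Ch2 selects
    a joint-movement direction.
    """
    if np.max(np.abs(emg)) > EMG_TH:
        return "toggle_gripper"
    for name, sig in (("ch1", ch1), ("ch2", ch2)):
        peak = sig[np.argmax(np.abs(sig))]
        if abs(peak) > EOG_TH:
            return f"{name}_{'pos' if peak > 0 else 'neg'}"
    return "idle"

print(discriminate(np.array([0.0, 80.0, 10.0]), np.zeros(3), np.zeros(3)))
```

Each of the four EOG outputs (`ch1_pos`, `ch1_neg`, `ch2_pos`, `ch2_neg`) would then be mapped to one joint-movement direction of the manipulator.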